
    Bayesian Inference Semantics: A Modelling System and A Test Suite

    We present BIS, a Bayesian Inference Semantics, for probabilistic reasoning in natural language. The current system is based on the framework of Bernardy et al. (2018), but departs from it in important respects. BIS makes use of Bayesian learning for inferring a hypothesis from premises. This involves estimating the probability of the hypothesis, given the data supplied by the premises of an argument. It uses a syntactic parser to generate typed syntactic structures that serve as input to a model generation system. Sentences are interpreted compositionally as probabilistic programs, and the corresponding truth values are estimated using sampling methods. BIS successfully deals with various probabilistic semantic phenomena, including frequency adverbs, generalised quantifiers, generics, and vague predicates. It performs well on a number of interesting probabilistic reasoning tasks. It also sustains most classically valid inferences (instantiation, de Morgan's laws, etc.). To test BIS we have built an experimental test suite with examples of a range of probabilistic and classical inference patterns.
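    The core idea — estimating the probability of a hypothesis given the premises of an argument by sampling — can be illustrated independently of BIS itself. The sketch below is not the BIS implementation: the predicates, priors, and the treatment of "tall" as a normally distributed cutoff are all illustrative assumptions.

```python
import random

random.seed(0)

def sample_model():
    # Illustrative priors (not from the paper): heights in cm.
    john = random.gauss(175, 10)
    mary = random.gauss(165, 10)
    # The vague predicate "tall" as a probabilistic threshold.
    tall_cutoff = random.gauss(180, 5)
    return john, mary, tall_cutoff

def estimate(n=100_000):
    """Monte Carlo estimate of P(hypothesis | premise) by
    rejection sampling: keep only worlds satisfying the premise."""
    accepted = hyp_true = 0
    for _ in range(n):
        john, mary, cutoff = sample_model()
        if john > mary:            # premise: John is taller than Mary
            accepted += 1
            if john > cutoff:      # hypothesis: John is tall
                hyp_true += 1
    return hyp_true / accepted

print(round(estimate(), 2))
```

    With these made-up priors the premise shifts probability mass toward taller samples of John, so the hypothesis comes out moderately probable rather than true or false outright — the graded judgements that motivate a probabilistic semantics.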

    Adverbs in a Modern Type Theory

    Abstract. This paper is the first attempt to deal with aspects of the semantics of adverbs within a modern type theoretical setting. A number of issues pertaining to the semantics of different classes of adverbs, such as veridical and intensional adverbs, will be discussed and shown to be captured straightforwardly within a modern type theoretical setting. In particular, I look at the issue of veridicality and show that the inferences associated with veridical adverbs can be dealt with via typing alone, i.e. without the aid of meaning postulates. In the case of intensional adverbs like intentionally or allegedly, I show that these can be captured by making use of the type theoretical notion of context, i.e. without the use of possible worlds.
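    How typing alone can license a veridical inference may be sketched in Lean 4. This is a minimal illustration, not the paper's MTT encoding: packaging a predicate modifier with an entailment witness is one simple way to make the inference follow from the type.

```lean
-- Hypothetical sketch (not the paper's encoding): a veridical adverb
-- as a predicate modifier bundled with a proof that the modified
-- predicate entails the unmodified one.
structure VeridicalAdv (A : Type) where
  mod   : (A → Prop) → (A → Prop)
  verid : ∀ (p : A → Prop) (x : A), mod p x → p x

-- "John ran quickly ⊢ John ran" then holds by typing, with no
-- meaning postulate:
example (Ind : Type) (quickly : VeridicalAdv Ind)
    (ran : Ind → Prop) (john : Ind)
    (h : quickly.mod ran john) : ran john :=
  quickly.verid ran john h
```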

    Completability vs (In)completeness

    In everyday conversation, no notion of “complete sentence” is required for syntactic licensing. However, so-called “fragmentary”, “incomplete”, and abandoned utterances are problematic for standard formalisms. When contextualised, such data show that (a) non-sentential utterances are adequate to underpin agent coordination, while (b) all linguistic dependencies can be systematically distributed across participants and turns. Standard models have problems accounting for such data because their notions of ‘constituency’ and ‘syntactic domain’ are independent of performance considerations. Concomitantly, we argue that no notion of “full proposition” or encoded speech act is necessary for successful interaction: strings, contents, and joint actions emerge in conversation without any single participant having envisaged in advance the outcome of their own or their interlocutors’ actions. Nonetheless, morphosyntactic and semantic licensing mechanisms need to apply incrementally and subsententially. We argue that, while a representational level of abstract syntax, divorced from conceptual structure and physical action, impedes natural accounts of subsentential coordination phenomena, a view of grammar as a “skill” employing domain-general mechanisms, rather than fixed form-meaning mappings, is needed instead. We provide a sketch of a predictive and incremental architecture (Dynamic Syntax) within which underspecification and time-relative update of meanings and utterances constitute the sole concept of “syntax”.

    Formal Semantics in Modern Type Theories: Is It Model-Theoretic, Proof-Theoretic, or Both?

    Abstract. In this talk, we contend that, for NLs, the divide between model-theoretic semantics and proof-theoretic semantics has not been well-understood. In particular, the formal semantics based on modern type theories (MTTs) may be seen as both model-theoretic and proof-theoretic. To be more precise, it may be seen both ways in the sense that the NL semantics can first be represented in an MTT in a model-theoretic way and then the semantic representations can be understood inferentially in a proof-theoretic way. Considered in this way, MTTs arguably have unique advantages when employed for formal semantics.

    Type Theories and Lexical Networks : using Serious Games as the basis for Multi-Sorted Typed Systems

    In this paper, we show how a rich lexico-semantic network which has been built using serious games, JeuxDeMots, can help us ground our semantic ontologies when doing formal semantics using rich or modern type theories (type theories within the tradition of Martin-Löf). We discuss the issue of base types, adjectival and verbal types, hyperonymy/hyponymy relations, as well as more advanced issues like homophony and polysemy. We show how one can take advantage of this wealth of lexical semantics in a formal compositional semantics framework. We argue that this is a way to sidestep the problem of deciding what the type ontology should look like once a move to a many-sorted type system has been made. Furthermore, we show how this kind of information can be extracted from a lexico-semantic network like JeuxDeMots and inserted into a proof-assistant like Coq in order to perform reasoning tasks.
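    The role hyperonymy/hyponymy relations play in a many-sorted type system can be sketched with a toy network. The fragment below is entirely hypothetical (the real JeuxDeMots data is far richer, and the paper works in Coq, not Python); it only illustrates how hypernym edges induce a subtyping check for selectional restrictions.

```python
# Hypothetical toy fragment of a lexico-semantic network:
# hyponym -> hypernym edges.
HYPERNYM = {
    "dog": "animal",
    "cat": "animal",
    "animal": "entity",
    "book": "artifact",
    "artifact": "entity",
}

def is_subtype(a: str, b: str) -> bool:
    """a <: b iff b is reachable from a via hypernym edges."""
    while a is not None:
        if a == b:
            return True
        a = HYPERNYM.get(a)
    return False

# Verbal types as argument sorts: "bark" selects for animals.
VERB_DOMAIN = {"bark": "animal", "read": "artifact"}

def well_typed(verb: str, arg_sort: str) -> bool:
    return is_subtype(arg_sort, VERB_DOMAIN[verb])

print(well_typed("bark", "dog"))   # dog <: animal, so this type-checks
print(well_typed("bark", "book"))  # book is not an animal: type error
```

    Reading the base types and their ordering off the network, rather than stipulating them, is the sense in which the ontology question is sidestepped.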

    Bayesian Inference Semantics for Natural Language

    In this chapter we present Bayesian Inference Semantics (BIS). This system assigns probability conditions to inferences, and it defines functions for the typed constituents of sentences that generate these conditions compositionally. This framework permits us to capture vagueness through probability distributions for predicates, and for the sentential assertions that are constructed from them. Vagueness is, then, a core property of expressions in our account. This allows us to provide natural representations of scalar adjectives and vague classifier terms, while these are problematic for classical semantic theories. Using probability distributions over the definitions of predicates also permits us to handle the sorites paradox in a straightforward way. We sustain the fuzzy boundaries of classifiers through these distributions, without invoking sharp borders between objects to which classifier terms apply and those to which they do not.
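    The dissolution of the sorites step can be made concrete with a single vague predicate. The sketch below is an illustration under assumed parameters, not the chapter's definitions: "tall" is modelled by a normally distributed cutoff, so the probability that the predicate applies changes smoothly with height and there is no sharp border for a one-centimetre step to cross.

```python
import math

def p_tall(height_cm: float, mu: float = 180.0, sigma: float = 5.0) -> float:
    """Probability that "tall" applies, with the cutoff distributed
    as N(mu, sigma) (illustrative parameters)."""
    # P(height > cutoff) = standard normal CDF of (height - mu) / sigma
    return 0.5 * (1 + math.erf((height_cm - mu) / (sigma * math.sqrt(2))))

# The boundary is fuzzy: no single centimetre flips a clear judgement,
# which is why the inductive sorites premise loses its force.
for h in (170, 175, 180, 185, 190):
    print(h, round(p_tall(h), 2))
```

    The same distributional move extends to vague classifier terms generally: membership becomes graded where classical theories would have to draw a line.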

    Measuring Linguistic Complexity: Introducing a New Categorial Metric

    This paper provides a computable quantitative measure which accounts for the difficulty of human sentence processing: why is one sentence harder to parse than another? Why is one reading of a sentence easier than another? We take for granted psycholinguistic results on human processing complexity such as Gibson's. We define a new metric which uses Categorial Proof Nets to correctly model Gibson's account in his Dependency Locality Theory. The proposed metric correctly predicts performance phenomena such as structures with embedded pronouns, garden paths, unacceptable center embeddings, preference for lower attachment, and the acceptability of passive paraphrases. Our proposal gets closer to modern computational psycholinguistic theories, while opening the door to including semantic complexity, thanks to the straightforward syntax-semantics interface in categorial grammars.
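    The flavour of a Dependency Locality Theory cost can be shown with a much-simplified sketch. This is not the paper's proof-net metric: the tag set, the dependency pairs, and the counting convention below are all hypothetical, and only illustrate the DLT idea that a dependency costs more the more discourse referents intervene between its two ends.

```python
# Simplified Gibson-style integration cost: the cost of a dependency
# grows with the number of new discourse referents (here approximated
# as nouns and verbs) crossed between dependent and head.
REFERENT_TAGS = {"N", "V"}

def integration_cost(tags, dependencies):
    """tags: POS tag per word; dependencies: (head, dependent) pairs
    of word indices. Returns the summed intervening-referent count."""
    total = 0
    for head, dep in dependencies:
        lo, hi = sorted((head, dep))
        total += sum(1 for t in tags[lo + 1 : hi + 1] if t in REFERENT_TAGS)
    return total

# "The reporter who the senator attacked admitted the error":
# the object relative clause forces long-distance integrations.
tags = ["D", "N", "C", "D", "N", "V", "V", "D", "N"]
deps = [(1, 5), (6, 1), (6, 8)]  # hypothetical dependency pairs
print(integration_cost(tags, deps))
```

    A subject relative clause over the same words yields shorter dependencies and hence a lower total, matching the embedded-pronoun and center-embedding contrasts the metric is meant to predict.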